Patent abstract:
A method of recognizing characters in an image of a document comprising at least one alphanumeric field, the method comprising the steps of: - enhancing the contrast of the image to bring out the characters present in the image; - detecting the contours of objects present in the image to create a mask highlighting the characters; - segmenting the image using a connected component tree and applying the mask to extract the characters from the image; - performing character recognition on the extracted objects. A device for implementing this method is also provided.
Publication number: FR3081244A1
Application number: FR1854140
Filing date: 2018-05-17
Publication date: 2019-11-22
Inventors: Segbedji Goubalan; Thierry Viguier
Applicant: Idemia Identity and Security France SAS
IPC main classification:
Patent description:

The present invention relates to the field of image processing for the purpose of carrying out character recognition in any written document such as, for example, a transport ticket or an identity document.
Invention background
An identity document, such as a passport or a national identity card, includes fields of text containing, in the form of sequences of alphanumeric characters, for example, the name, first names, date and place of birth of the identity document holder, as well as the name of the authority that issued the identity document and the date of issue.
Certain administrative operations require having a facsimile of the document and re-entering the content of at least some of these fields. To speed up processing, it is known to digitize the document and extract the content of the text fields using a computer program implementing a character recognition algorithm.
It is moreover known to attach security elements to these documents to make falsification and unauthorized reproduction more complex. These security elements are often present in the background of the document and include, for example, decorations or fine lines forming patterns or characters.
However, it sometimes happens that these security elements, in particular when they are highly contrasted and border on a text field, are interpreted as characters by the character recognition program. This results in errors prejudicial to the efficiency of the image processing applied to the documents and, consequently, to the completion of the administrative formalities.
Subject of the invention
An object of the invention is to provide a means for making character recognition more reliable, especially when the background is heterogeneous and/or when the background is not known a priori.
Brief description of the invention
To this end, the invention provides a method for recognizing characters in an image of a document comprising at least one alphanumeric field, the method comprising the steps of:
- reinforcing the contrast of the image to bring out the characters present in the image;
- detecting the contours of objects present in the image to create a mask bringing out the characters;
- segmenting the image using a connected component tree and applying the mask to it so as to extract the characters from the image;
- performing character recognition on the extracted objects.
The method of the invention makes it possible, without human intervention, to limit the influence of the background of the image and of the digitizing artifacts on the extraction of the alphanumeric characters present in the image, which improves the reliability of automatic character recognition. This also makes it possible to carry out character recognition even from a scan having a quality which would have been considered insufficient for the character recognition processes of the prior art.
The invention also has for object a character recognition device including a computer unit (1) provided with means for its connection to a digitizing apparatus arranged to carry out a digitization of a written document, characterized in that the computer unit (1) comprises at least one processor and a memory containing a program implementing the method according to the invention.
Other characteristics and advantages of the invention will emerge on reading the following description of a particular, non-limiting embodiment of the invention.
Brief description of the drawings
Reference will be made to the attached drawings, among which:
- Figure 1 is a schematic view of a device for implementing the method of the invention;
- Figure 2 is a schematic view of an image of a document comprising characters recognizable by the method according to the invention;
- Figure 3 is a diagram showing the different steps of the method according to the invention;
- Figures 4.a and 4.b are detailed views of this image before and after a contrast enhancement;
- Figures 5.a and 5.b are detailed views of this image before and after a contrast enhancement;
- Figures 5.c and 5.d are detailed views of this image during segmentation of the image by means of a mask.
Detailed description of a method of implementing the invention
Referring to Figure 1, the method of the invention is implemented by means of a device comprising a computer unit 1 connected to a digitizing apparatus arranged to carry out a digitization of a written document. The computer unit 1 is a computer which comprises at least one processor and a memory containing an image acquisition program and a program implementing the method of the invention. The processor is arranged to execute these programs.
The digitizing apparatus is for example a scanner 2 dedicated to the scanning of written documents (commonly called a flatbed scanner), or else an image sensor of a communication terminal such as a smartphone 3 connectable to the computer unit 1 via a network such as the Internet. The scanner 2 is here controlled directly by the computer unit 1 to acquire the image of the document. Alternatively, the scanner 2 can be connected to another computer unit which will control the acquisition of the image and will send the image to the computer unit 1, which will carry out the image processing and the character recognition proper. In the case of a capture by the smartphone 3, the user controls the acquisition of the image of the written document directly from the smartphone 3 and then transmits this image to the computer unit 1 so that the latter provides the image processing and the character recognition proper. The digitizing apparatus is in all cases arranged to capture an image of the written document having a sufficient resolution to make it possible to extract the alphanumeric characters present in the image and to recognize said characters.
The written document here is more particularly an identity document such as an identity card or a passport.
In Figure 2 is shown an image 10 of this identity document. Image 10 has been captured by the scanning device. In this image 10, it can be seen that the identity document includes a photograph of its holder and fields of alphanumeric characters, namely here a "Date" field 11 and a "City" field 12. Obviously, the identity document actually includes other fields of alphanumeric characters, such as the fields "Name", "First names", "Date of birth", "Place of birth", "Nationality", "Address", "End of validity date", which have not been shown here. In the following description, the word "characters" alone will be used to designate the alphanumeric characters. The identity document also includes security or decorative elements likely to interfere with the written characters (not shown in Figure 2).
The method of the invention implemented by the program executed by the computer unit 1 comprises the following steps (Figure 3):
- segment the image to identify objects in it (step 110);
- define a bounding box 20 around each object and make a first selection to select the bounding boxes supposedly containing a character as a function of at least one theoretical dimensional characteristic of an alphanumeric character (step 120);
- make a second selection comprising the application, to each selected bounding box, of shape descriptors and the implementation of a decision-making algorithm to select, on the basis of the descriptors, the bounding boxes supposedly containing a character (step 130);
- group the bounding boxes according to the relative positions of the bounding boxes (step 140);
- make a third selection by dividing each of these bounding boxes into a plurality of cells, for each of which a texture descriptor is determined in the form of a histogram of oriented gradients, the histograms then being concatenated and a decision-making algorithm being implemented to select, on the basis of the descriptors, the bounding boxes supposedly containing a character (step 150);
- improve the contrast of the image and detect the contours of objects present in the image so as to create a mask highlighting the characters (step 160);
- segment the image by applying the mask to the image to extract the objects visible through the mask (step 170);
- perform character recognition on the finally selected bounding boxes (step 180).
These steps will now be detailed.
Step 110 here consists in applying to the image an alternating sequential filter, which is a mathematical morphological filter. In practice, the program scans the image with a geometric window (commonly called a structuring element) which is in the shape of a circle (but which could be rectangular, linear or other) of 5 to 10 pixels in radius, eliminates everything that fits entirely in said window (an operation commonly called erosion) and expands any part of an object that does not fit fully in the window. Given the dimensions of the window, a character will not fit fully inside the window and will therefore be dilated; the rest is necessarily noise and is eliminated. Preferably, several passes are made, increasing the dimensions of the window between each pass, to filter the noise of the image gradually. Alternatively, this step can be carried out by implementing an algorithm of MSER type (from the English "Maximally Stable Extremal Regions") or by filtering the image using a threshold corresponding to a theoretical intensity of a character (when the threshold is reached, the object is considered to be a character; when the threshold is not reached, the object is not a character).
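The morphological filtering of step 110 can be sketched as follows. This is a minimal illustration using scipy's grayscale morphology; the window sizes and the square window shape are assumptions for the example, not the patent's exact parameters.

```python
# Sketch of step 110: an alternating sequential filter (opening then
# closing with a growing window) that removes background noise while
# preserving character-sized objects.
import numpy as np
from scipy import ndimage

def alternating_sequential_filter(image, max_size=5):
    """Apply opening then closing with a growing window (grayscale ASF)."""
    out = image.astype(np.float64)
    for size in range(2, max_size + 1):
        out = ndimage.grey_opening(out, size=(size, size))
        out = ndimage.grey_closing(out, size=(size, size))
    return out

# Toy image: one isolated noise pixel and one character-like stroke.
img = np.zeros((20, 30))
img[4, 5] = 255.0            # speckle noise, fits in the window: removed
img[10:16, 8:24] = 255.0     # 6x16 "stroke", larger than the window: kept
filtered = alternating_sequential_filter(img, max_size=3)
```

The speckle disappears at the first opening while the stroke survives every pass, which is exactly the behavior the paragraph above describes.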
At the end of this step, the program has therefore brought out objects (which one could also call connected components) which include the alphanumeric characters as well as other objects, including elements which are not characters, such as security or decorative elements. However, at this stage, a significant part of these unwanted elements has been excluded.
In step 120, on each of the objects remaining in the image, the program applies a bounding box 20 (visible in Figure 2) respecting several theoretical geometric criteria of the characters, namely: the height, the width and/or a ratio of dimensions (or AR, from the English "aspect ratio"; height/width for example). If an object, and therefore its bounding box 20, has a height and a width (or a ratio thereof) corresponding to the theoretical ones of a character, it is presumed to be an alphanumeric character. The program can therefore select the objects which can correspond to characters on the basis of geometric criteria.
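The geometric selection of step 120 can be sketched like this. The height, width and aspect-ratio bounds below are illustrative assumptions; the patent does not give numeric values.

```python
# Sketch of step 120: keep only connected components whose bounding box
# matches plausible character geometry (height, width, aspect ratio).
import numpy as np
from scipy import ndimage

def select_character_boxes(binary, min_h=8, max_h=40, min_w=2, max_w=30,
                           min_ar=0.5, max_ar=8.0):
    """Return bounding boxes (slice pairs) of character-sized components."""
    labels, _ = ndimage.label(binary)
    boxes = []
    for sl in ndimage.find_objects(labels):
        h = sl[0].stop - sl[0].start
        w = sl[1].stop - sl[1].start
        ar = h / w
        if min_h <= h <= max_h and min_w <= w <= max_w and min_ar <= ar <= max_ar:
            boxes.append(sl)
    return boxes

img = np.zeros((50, 50), dtype=bool)
img[10:26, 5:15] = True    # 16x10 blob: character-like, kept
img[40:42, 40:42] = True   # 2x2 blob: too small, rejected
boxes = select_character_boxes(img)
```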
To automatically select the objects corresponding to alphanumeric characters in step 130, the program implements a decision-making algorithm (more commonly called a classifier). On each object previously selected, several types of shape descriptors are determined, namely here:
- the Fourier moments,
- the Krawtchouk moments.
It is recalled that a moment is a formula applied to a pixel or a set of pixels making it possible to describe the structure that one is trying to apprehend, namely here a character. Other descriptors could be used instead of or in addition to the Fourier moments and/or the Krawtchouk moments. However, the combined use of these two types of descriptors gives remarkable results.
The Fourier moments are used in a classifier (here of SVM type, from the English "Support Vector Machine") in order to produce a first character/non-character output.
The Krawtchouk moments are used in a classifier (here again of SVM type) in order to produce a second character/non-character output.
These two outputs are then concatenated to form the input vector of a classifier (here again of SVM type) providing a third output. This third output is compared to a threshold to provide a binary decision: "character" or "non-character". Preferably, to form the input vector, the first output and the second output are weighted for each object, for example according to the performance of the descriptors taking into account the type of background.
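The two-stage SVM fusion of step 130 can be sketched as below. The toy descriptors stand in for the Fourier and Krawtchouk moments, and the 0.6/0.4 weights are assumptions for illustration; the patent only says the two outputs are weighted and concatenated.

```python
# Sketch of step 130's late fusion: one SVM scores "Fourier" descriptors,
# one scores "Krawtchouk" descriptors, and a third SVM decides from the
# two weighted outputs.
import numpy as np
from sklearn.svm import SVC

rng = np.random.RandomState(0)
# Toy descriptors for 40 character (1) / 40 non-character (0) samples.
y = np.array([1] * 40 + [0] * 40)
fourier = rng.randn(80, 6) + y[:, None] * 2.0
krawtchouk = rng.randn(80, 9) + y[:, None] * 2.0

svm_f = SVC().fit(fourier, y)
svm_k = SVC().fit(krawtchouk, y)

# Weighted character/non-character scores, concatenated into a vector.
scores = np.column_stack([0.6 * svm_f.decision_function(fourier),
                          0.4 * svm_k.decision_function(krawtchouk)])
svm_final = SVC().fit(scores, y)
decisions = svm_final.predict(scores)  # 1 = "character", 0 = "non-character"
```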
Following this operation, an image is obtained containing the objects, devoid of most of the possible stains and noise initially present in the image, often due to the presence of the security or decorative elements of the document.
In step 140, the program operates a grouping of characters in the form of one or more words or lines of text according to geometric criteria which, in addition to the height, width and/or AR dimension ratio, include the centroids (or barycenters) of the bounding boxes 20 associated with each character. More precisely, the program detects whether the centroids are aligned on the same line and calculates the distances separating the centroids of the bounding boxes 20 associated with adjacent characters to determine whether they belong to the same word. The grouped characters are associated in a collective bounding box.
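The grouping of step 140 can be sketched as follows. The alignment tolerance and the maximum horizontal gap are illustrative assumptions; the patent does not specify thresholds.

```python
# Sketch of step 140: group character bounding boxes into words when their
# centroids are roughly on the same line and horizontally close.

def group_boxes(boxes, max_dy=5.0, max_gap=15.0):
    """boxes: list of (x0, y0, x1, y1). Returns lists of boxes per word."""
    def centroid(b):
        return ((b[0] + b[2]) / 2.0, (b[1] + b[3]) / 2.0)

    ordered = sorted(boxes, key=lambda b: centroid(b)[0])
    groups = []
    for box in ordered:
        cx, cy = centroid(box)
        for group in groups:
            gx, gy = centroid(group[-1])
            # Same word if centroids are aligned and the gap is small.
            if abs(cy - gy) <= max_dy and abs(cx - gx) <= max_gap:
                group.append(box)
                break
        else:
            groups.append([box])
    return groups

boxes = [(0, 10, 8, 20), (10, 10, 18, 20), (20, 11, 28, 21),  # one word
         (60, 10, 68, 20)]                                     # far away
words = group_boxes(boxes)
```

Each resulting group would then be wrapped in a collective bounding box.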
In step 150, the program examines the content of each collective bounding box and eliminates those that do not seem to contain a field of text. In fact, during the phases described above, it may happen that lines are unfortunately formed by grouping objects of which at least one is not a character. This step therefore makes it possible to eliminate false positives.
It is known that different text regions have distinct distributions of gradient orientations: the reason is that gradients of high amplitude are generally perpendicular to the contours that form the characters. The program uses for this step a texture descriptor based on a histogram of oriented gradients, or HOG (from the English "Histogram of Oriented Gradients"), which is known in text recognition. Conventionally:
- the area to be recognized is subdivided into Nl rows and Nc columns globally on the image,
- a histogram is calculated for each of the Nl x Nc cells,
- the histograms are concatenated with each other for the whole image.
According to the method of the invention, the program is advantageously arranged to subdivide the bounding box 20 of each object into 3 rows and 1 column, because this division makes it possible to significantly improve the "word" or "not word" decision. Thus, a histogram is calculated on each of the three cells of each bounding box 20 containing a priori a character. The histograms are then concatenated to each other and introduced into a classifier (here again of the SVM type) to decide whether the collective bounding box corresponds to text. Note that the division is strongly dependent on the size of the characters. The bounding box 20 in which the division is made must be the size of each character (if the bounding box 20 of a character is initially 28 pixels x 28 pixels but the character occupies only 50% of it, the box is resized so that the character occupies all of it, and then the division is done).
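The 3 rows x 1 column HOG descriptor of step 150 can be sketched with numpy. The 8-bin orientation quantization and the unsigned-gradient convention are assumptions for the example; the patent only specifies the 3x1 cell layout.

```python
# Sketch of step 150's texture descriptor: split a character's bounding
# box into 3 rows x 1 column, compute a gradient-orientation histogram
# per cell, concatenate the three histograms, then L2-normalize.
import numpy as np

def hog_3x1(patch, nbins=8):
    gy, gx = np.gradient(patch.astype(np.float64))
    magnitude = np.hypot(gx, gy)
    orientation = np.mod(np.arctan2(gy, gx), np.pi)  # unsigned gradients
    rows = np.array_split(np.arange(patch.shape[0]), 3)
    hists = []
    for r in rows:
        h, _ = np.histogram(orientation[r], bins=nbins, range=(0, np.pi),
                            weights=magnitude[r])
        hists.append(h)
    descriptor = np.concatenate(hists)
    norm = np.linalg.norm(descriptor)
    return descriptor / norm if norm > 0 else descriptor

patch = np.zeros((27, 18))
patch[6:21, 4:14] = 1.0  # a crude "character"
desc = hog_3x1(patch)    # length 3 * nbins, fed to the SVM classifier
```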
In step 160, the program proceeds, in each collective bounding box, to an analysis of the color of the image (two parts of the image before carrying out this step are shown in Figures 4.a and 5.a): the objective here is to saturate the large differences in the image and to amplify the small differences by saturating the color channels (RGB, that is to say red, green, blue) to bring out the color of the characters (in the case of a black-and-white image, the program acts on the gray levels). For this, the program performs a contrast enhancement which consists in locally adapting the contrast of the image by lateral inhibition (a difference of neighboring pixels weighted by the Euclidean distance between the pixels). Only the strongest gradients are retained. Finally, the program also adapts the image in order to obtain an overall white balance (see the two parts of the image after step 160 in Figures 4.b and 5.b). This step improves contrast and color correction. Alternatively, a histogram equalization algorithm could have been used, but such an algorithm produces artifacts and artificial colors in the backgrounds which may complicate further processing of the image.
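The goal of step 160 (saturate large differences, equalize the white balance) can be illustrated with a simple stand-in. This sketch uses a percentile contrast stretch plus a gray-world white balance; it does not reproduce the patent's lateral-inhibition enhancement, and the percentile values are assumptions.

```python
# Sketch of step 160's effect: per-channel contrast stretch (saturating
# the extremes of each RGB channel) followed by a gray-world white balance.
import numpy as np

def enhance(image_rgb, low=2, high=98):
    img = image_rgb.astype(np.float64)
    out = np.empty_like(img)
    for c in range(3):
        lo, hi = np.percentile(img[..., c], [low, high])
        out[..., c] = np.clip((img[..., c] - lo) / max(hi - lo, 1e-6), 0, 1)
    # Gray-world white balance: scale channels toward a common mean.
    means = out.reshape(-1, 3).mean(axis=0)
    out *= means.mean() / np.maximum(means, 1e-6)
    return np.clip(out * 255, 0, 255).astype(np.uint8)

rng = np.random.RandomState(1)
# A toy image with a strong color cast (red-heavy, blue-poor).
img = (rng.rand(32, 32, 3) * np.array([200.0, 120.0, 80.0])).astype(np.uint8)
balanced = enhance(img)
```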
Step 170 aims to remove the background of the image in order to get rid of any background element therein, such as security or decorative elements, likely to subsequently disturb the recognition of characters.
The previous step made it possible to improve the color of the image and to saturate the black characters. It is therefore easier to detect the outlines of the characters. The process of the invention implemented by the program uses for this purpose a contour detection filter, and more particularly a Sobel filter.
The image obtained at output (Figure 5.c) is then used as a mask in an approach of segmentation by a tree of connected components. In general, a tree of connected components associates with a gray-level image a descriptive data structure induced by an inclusion relation between the binary connected components obtained by the successive application of level lines. The use of the mask makes it possible to select from the tree only what concerns the characters. This selection is made automatically, so that the segmentation by tree of connected components can be carried out automatically, without human intervention, whereas, conventionally, the segmentation by tree of connected components implements an interactive process with an operator. The segmentation of a field by the method of the invention can thus be carried out much faster than with the conventional method. Tests carried out by the Applicant have shown that the segmentation by the method of the invention was faster in a ratio greater than 60 or even 70. Thus, the segmentation according to the invention makes it possible to reduce the computation time.
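The mask creation of steps 160/170 can be sketched as below: a Sobel edge map thresholded into a binary mask, then applied to keep only the character regions. The percentile threshold and the dilation amount are illustrative assumptions, and the connected-component-tree segmentation that the mask feeds is not reproduced here.

```python
# Sketch of the mask of step 170: Sobel gradient magnitude, thresholded
# and dilated so the mask covers the character strokes, then applied to
# suppress the background.
import numpy as np
from scipy import ndimage

def character_mask(gray, percentile=90):
    gx = ndimage.sobel(gray.astype(np.float64), axis=1)
    gy = ndimage.sobel(gray.astype(np.float64), axis=0)
    magnitude = np.hypot(gx, gy)
    mask = magnitude > np.percentile(magnitude, percentile)
    # Thicken the edges so the mask covers whole character outlines.
    return ndimage.binary_dilation(mask, iterations=2)

gray = np.full((40, 40), 230.0)   # light background
gray[10:30, 15:25] = 20.0         # dark "stroke"
mask = character_mask(gray)
segmented = np.where(mask, gray, 0.0)  # background zeroed out
```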
The character recognition performed by the program in step 180 can implement any character recognition algorithm. More precisely, the program applies a model of segmentation and word recognition which is based on a deep learning architecture combining convolutional neural networks (CNN, from the English Convolutional Neural Network) and LSTM networks (from the English Long Short-Term Memory). In this case, the convolutional neural network gives particularly good results because the background of the image was eliminated before its implementation. This elimination of the background decreases the rate of false positives during OCR and in particular avoids the appearance of ghost characters, that is to say patterns issuing from the background and/or from the security or decorative elements, these patterns having a form close to that of a character and being erroneously recognized as a character during OCR.
Preferably, a multiscale approach will be carried out as a variant. Indeed, the characters which are larger than the window used during step 110 are often over-segmented. To avoid this drawback, the method according to the invention provides for carrying out steps 110 and 120 at different resolutions, the dimensions of the window remaining identical. In practice, the program proceeds to several scanning passes and reduces the resolution after each pass to eliminate each time all the objects which do not fit entirely in the window but which have sizes smaller than that of a character. By way of example, the initial resolution is 2000 x 2000 pixels and there are five decreases in resolution (the resolution is halved each time). A number of five decreases represents a good compromise between efficiency and calculation time.
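The multiscale variant can be sketched as a loop that reruns the fixed-size morphological filter at successively halved resolutions, so that objects too large for the window at full resolution eventually fit at a coarser scale. A smaller toy image and a morphological opening stand in for the full step 110 pipeline; the window size is an assumption.

```python
# Sketch of the multiscale variant: fixed window, resolution halved after
# each pass (five halvings, as in the text, from the initial resolution).
import numpy as np
from scipy import ndimage

def multiscale_pass(image, n_scales=5, window=5):
    current = image.astype(np.float64)
    shapes = [current.shape]
    for _ in range(n_scales):
        current = ndimage.grey_opening(current, size=(window, window))
        current = ndimage.zoom(current, 0.5, order=1)  # halve the resolution
        shapes.append(current.shape)
    return current, shapes

img = np.zeros((400, 400))
img[100:140, 100:140] = 255.0  # an object larger than the 5x5 window
result, shapes = multiscale_pass(img)
```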
Note that the relevant geometric criteria for the grouping of characters and the choice of the different parameters allowing an effective detection of words were selected in order to have a set of effective parameters for each type of image (depending on the range of wavelengths used for scanning: visible, IR and UV).
Of course, the invention is not limited to the mode of implementation described but encompasses any variant falling within the scope of the invention as defined in the appended claims.
In particular, the process has been described in its most efficient version, whatever the digitization device used.
For scanning by a flatbed scanner, the method of the invention may include only the following steps:
- enhance the contrast of the image;
- detect the contours of objects present in the image to create a mask highlighting the characters;
- segment the image by applying the mask to the image to extract the objects visible through the mask;
- perform character recognition on the extracted objects.
For scanning by smartphone, the method of the invention may include only the following steps:
- segment the image to identify objects in it;
- define a bounding box around each object and make a first selection to select the bounding boxes supposedly containing a character as a function of at least one theoretical dimensional characteristic of an alphanumeric character;
- make a second selection comprising the application, to each selected bounding box, of shape descriptors and the implementation of a decision-making algorithm to select, on the basis of the descriptors, the bounding boxes supposedly containing a character;
- group the bounding boxes according to the relative positions of the bounding boxes;
- make a third selection by dividing each of these bounding boxes into a plurality of cells, for each of which a texture descriptor is determined in the form of a histogram of oriented gradients, the histograms then being combined and a decision-making algorithm implemented to select, on the basis of the descriptors, the bounding boxes containing a character;
- perform character recognition on the finally selected bounding boxes.
In all cases, the multiscale approach is optional.
It is possible to combine several classifiers or to use other classifiers than those indicated. Preferably, each classifier used will be of a type included in the following group: SVM (from the English "Support Vector Machine"), RVM (from the English "Relevance Vector Machine"), K nearest neighbors (or KNN), Random Forest. It may be noted, for example, that the RVM classifier allows a probabilistic interpretation making it possible to have fewer examples for the learning phase.
It is possible to group by line or by word. Account will be taken, for example, of the type of document: thus, on identity documents of British origin, there are sometimes large spaces between the letters, which leaves the background very apparent; it is more effective to group by word for this type of document.
For step 150, other divisions can be envisaged, in particular 1 column and 7 rows.
Images can be processed in color or in grayscale. In grayscale, the use of the mask eliminates a large number of parasitic elements.
Alternatively, several other segmentation solutions could have been envisaged, such as global or adaptive thresholding, a mixture of Gaussians or any other technique, in order to effectively isolate the characters of the image.
The Krawtchouk moments can be used alone or in combination with other types of moments, for example shape descriptors based on the following moments: Fourier, Legendre, Zernike and Hu moments, as well as descriptors extracted by a LeNet-type convolutional neural network. It will be noted that the Krawtchouk moments become efficient descriptors for the characters by using polynomials of order 9, while polynomials of order 16 are necessary for the Legendre moments, 17 for the Zernike moments and more than 30 for the Fourier moments.
It will be noted that the method of the invention is particularly well suited to the processing of documents having heterogeneous backgrounds. The process can be implemented in the same way for processing documents having homogeneous backgrounds. It is also possible to provide a preliminary step of determining whether the background of the document is homogeneous and, if so, skipping the steps of contour detection and segmentation by mask. This segmentation is especially useful because it eliminates a large part of the background of the document which could impair character recognition. However, with a homogeneous background, this risk is limited. Another type of segmentation can possibly be envisaged.
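The preliminary homogeneity test can be sketched with a simple heuristic: treat the darkest pixels as text and measure the intensity spread of the rest. The percentile split and the standard-deviation threshold are assumptions for illustration; the patent does not specify how the determination is made.

```python
# Sketch of the optional preliminary step: decide whether the document
# background is homogeneous, in which case the mask-based segmentation
# can be skipped.
import numpy as np

def background_is_homogeneous(gray, text_percentile=30, max_std=10.0):
    """Treat the darkest pixels as text; measure the spread of the rest."""
    threshold = np.percentile(gray, text_percentile)
    background = gray[gray >= threshold]
    return background.size > 0 and float(background.std()) <= max_std

flat = np.full((50, 50), 240.0)
flat[20:30, 10:40] = 10.0                       # text on a plain background
busy = np.linspace(0, 255, 2500).reshape(50, 50)  # strongly varying background
```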
The device may have a structure different from that described. The image acquisition program can in particular be stored in a memory of the capture device in order to be executed directly by it. The device and the capture member can be incorporated in the same apparatus.
Claims (10)
1. A method for recognizing characters in an image of a document comprising at least one alphanumeric field, the method comprising the steps of:
- reinforce the contrast of the image to bring out the characters present in the image;
- detect the contours of objects present in the image to create a mask bringing out the characters;
- segment the image using a connected component tree and applying the mask to it so as to extract the characters from the image;
- perform character recognition on the extracted objects.
2. The method of claim 1, wherein the character recognition is performed by a neural network.
3. Method according to claim 2, wherein the neural network is of the convolutional type.
4. Method according to any one of the preceding claims, in which the document has a heterogeneous background.
5. Method according to any one of claims 1 to 3, comprising a preliminary step of determining whether the background of the document is homogeneous and, if so, skipping the steps of contour detection and of segmentation by mask.
6. Method according to any one of the preceding claims, in which the contrast enhancement is obtained by locally adapting the contrast of the image between neighboring pixels, taking into account a difference between pixels weighted by a Euclidean distance between them.
7. The method of claim 6, wherein the adaptation is carried out in order to obtain an overall white balance.
8. Method according to any one of the preceding claims, wherein the edge detection is carried out by applying a Sobel filter to the image.
9. The method according to any one of the preceding claims, wherein the neural network is of the convolutional type and with long short-term memory.
10. Character recognition device comprising a computer unit (1) provided with means for its connection to a digitizing apparatus arranged to carry out a digitization of a written document, characterized in that the computer unit (1) comprises at least one processor and a memory containing a program implementing the method according to any one of the preceding claims.
Patent family:
Publication number | Publication date
US11151402B2|2021-10-19|
US20190354791A1|2019-11-21|
FR3081244B1|2020-05-29|
EP3570212A1|2019-11-20|
Cited references:
Publication number | Filing date | Publication date | Applicant | Patent title
EP3007105A1|2014-10-10|2016-04-13|Morpho|Method for identifying a sign on a deformed document|
US8805110B2|2008-08-19|2014-08-12|Digimarc Corporation|Methods and systems for content processing|
US9014481B1|2014-04-22|2015-04-21|King Fahd University Of Petroleum And Minerals|Method and apparatus for Arabic and Farsi font recognition|
MX2019006851A|2016-12-16|2019-11-21|Kurz Digital Solutions Gmbh & Co Kg|Verification of a security document.|
US10643333B2|2018-04-12|2020-05-05|Veran Medical Technologies|Apparatuses and methods for navigation in and local segmentation extension of anatomical treelike structures|
FR3081245B1|2018-05-17|2020-06-19|Idemia Identity & Security France|CHARACTER RECOGNITION PROCESS|
EP3572972A1|2018-05-23|2019-11-27|IDEMIA Identity & Security Germany AG|Extendend convolutional neural network for document analysis|
US11087448B2|2019-05-30|2021-08-10|Kyocera Document Solutions Inc.|Apparatus, method, and non-transitory recording medium for a document fold determination based on the change point block detection|
CN110889379A|2019-11-29|2020-03-17|深圳先进技术研究院|Expression package generation method and device and terminal equipment|
CN111243050A|2020-01-08|2020-06-05|浙江省北大信息技术高等研究院|Portrait simple stroke generation method and system and drawing robot|
Legal status:
2019-04-18| PLFP| Fee payment|Year of fee payment: 2 |
2019-11-22| PLSC| Publication of the preliminary search report|Effective date: 20191122 |
2020-04-22| PLFP| Fee payment|Year of fee payment: 3 |
2021-04-21| PLFP| Fee payment|Year of fee payment: 4 |
Priority:
Application number | Filing date | Patent title
FR1854140|2018-05-17|
FR1854140A|FR3081244B1|2018-05-17|2018-05-17|CHARACTER RECOGNITION PROCESS|
EP19174226.1A| EP3570212A1|2018-05-17|2019-05-13|Character recognition method|
US16/415,631| US11151402B2|2018-05-17|2019-05-17|Method of character recognition in written document|